AIhub monthly digest: March 2026 – time series, multiplicity, and the history of RoboCup
Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we delved into the history of RoboCup, learned about time series, studied multiplicity, and found out more about Theory of Mind.

RoboCup is an international competition that promotes and advances robotics and AI through the challenges presented by its various leagues. We got the chance to sit down with Professor Manuela Veloso, one of RoboCup's founders, to find out more about how it all started, how the community has grown over the years, and the vision for the future.

What we've learned from 25 years of automated science, and what the future holds
We're excited to launch a new series, where we'll be speaking with leading researchers to explore the breakthroughs driving AI and the reality of the future promises, to give you an inside perspective on the headlines.
#AAAI2026 invited talk: machine learning for particle physics
Daniel Whiteson is a particle physicist who uses machine learning and statistical tools to analyze high-energy particle collisions. He is also a dedicated science communicator, having published books and comics, and is co-host of a science podcast. In his invited talk at the Fortieth AAAI Conference on Artificial Intelligence (AAAI-26), Daniel shared insights on both these aspects of his career. Daniel works at the Large Hadron Collider (LHC) at CERN, primarily looking at proton-proton collisions, which occur at 13 TeV, roughly 13,000 times the rest energy of a single proton. The majority of collisions result in known particles, such as electrons or muons.
Water flow in prairie watersheds is increasingly unpredictable -- but AI could help
In recent years, the Prairies have seen bigger swings in climate conditions -- very wet years followed by very dry ones. That makes an already unpredictable landscape even harder to forecast, with real consequences for flood preparedness and water quality. The challenge is the landscape itself. Much of the Canadian Prairies sit within the Prairie Pothole Region, a landscape dotted with millions of shallow wetlands and depressions. Water doesn't simply run downhill into a stream; it is stored first.
2026 AI Index Report released
The ninth edition of the Artificial Intelligence Index Report was published on 13 April 2026. Released annually, the report aims to provide readers with accurate, rigorously validated, and globally-sourced data, giving insights into the progress of AI and its potential impact on society. The 2026 AI Index Report comprises nine chapters, covering: research and development, technical performance, responsible AI, economy, science, medicine, education, policy and governance, and public opinion. AI capability is accelerating and reaching more people than ever. Model performance continues to improve against benchmarks, and 80% of university students now use generative AI.
Forthcoming machine learning and AI seminars: April 2026 edition
This post contains a list of the AI-related seminars that are scheduled to take place between 2 April and 31 May 2026. All events detailed here are free and open for anyone to attend virtually.

- What Do Our Benchmarks Actually Measure? Vukosi Marivate (University of Pretoria). Hosted by the University of Michigan. Zoom link is here.
- Optimization Over Trained Neural Networks: What, Why, and How? Thiago Serra Azevedo Silva (University of Iowa). Hosted by the Association of European Operational Research Societies. To receive the seminar link, sign up to the mailing list.
'Probably' doesn't mean the same thing to your AI as it does to you
When a human says an event is "probable" or "likely," people generally have a shared, if fuzzy, understanding of what that means. But when an AI chatbot like ChatGPT uses the same word, it's not assessing the odds the way we do, my colleagues and I found. We recently published a study in the journal NPJ Complexity suggesting that, while large language model AIs excel at conversation, they often fail to align with humans when communicating uncertainty. The research focused on words of estimative probability, which include terms like "maybe," "probably" and "almost certain." By comparing how AI models and humans map these words to numerical percentages, we uncovered significant gaps between humans and large language models.
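To make the comparison concrete, here is a minimal sketch of the kind of analysis described above: mapping words of estimative probability to numerical values and measuring the gap between two mappings. The numbers below are illustrative placeholders, not the study's actual data, and the function name is our own.

```python
# Hypothetical word -> probability mappings (illustrative values only).
human_map = {"maybe": 0.40, "probably": 0.70, "almost certain": 0.95}
model_map = {"maybe": 0.50, "probably": 0.85, "almost certain": 0.90}

def mean_abs_gap(a, b):
    """Average absolute difference between two word->probability mappings."""
    return sum(abs(a[w] - b[w]) for w in a) / len(a)

gap = mean_abs_gap(human_map, model_map)
print(f"mean absolute gap: {gap:.3f}")
```

A larger gap would indicate that the model's numerical interpretation of these words diverges further from the human consensus.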
A model for defect identification in materials
In biology, defects are generally bad. But in materials science, defects can be intentionally tuned to give materials useful new properties. Today, atomic-scale defects are carefully introduced during the manufacturing process of products like steel, semiconductors, and solar cells to help improve strength, control electrical conductivity, optimize performance, and more. But even as defects have become a powerful tool, accurately measuring different types of defects and their concentrations in finished products has been challenging, especially without cutting open or damaging the final material. Without knowing what defects are in their materials, engineers risk making products that perform poorly or have unintended properties.
Causal models for decision systems: an interview with Matteo Ceriscioli
How do you go about integrating causal knowledge into decision systems or agents? We sat down with Matteo Ceriscioli to find out about his research in this space. This interview is the latest in our series featuring the AAAI/SIGAI Doctoral Consortium participants. Could you start by telling us a bit about your PhD - where are you studying, and what's the broad topic of your research? The idea is to integrate causal knowledge into agents or decision systems to make them more reliable.
Policy Optimization with Advantage Regularization for Long-Term Fairness in Decision Systems
Long-term fairness is an important consideration in designing and deploying learning-based decision systems in high-stakes decision-making contexts. Recent work has proposed the use of Markov Decision Processes (MDPs) to formulate decision-making with long-term fairness requirements in dynamically changing environments, and demonstrated major challenges in directly deploying heuristic and rule-based policies that worked well in static environments. We show that policy optimization methods from deep reinforcement learning can be used to find strictly better decision policies, often achieving both higher overall utility and fewer violations of the fairness requirements compared to previously known strategies. In particular, we propose new methods for imposing fairness requirements in policy optimization by regularizing the advantage evaluation of different actions. Our proposed methods make it easy to impose fairness constraints without reward engineering or sacrificing training efficiency. We perform detailed analyses in three established case studies, including attention allocation in incident monitoring, bank loan approval, and vaccine distribution in population networks.
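The core idea of advantage regularization can be sketched in a few lines: subtract a weighted fairness-violation term from each action's advantage estimate before the policy update, so actions that worsen fairness are discouraged. This is a toy illustration of that mechanism under our own assumptions (the function name, the penalty weight `lam`, and the numbers are all hypothetical), not the authors' implementation.

```python
import numpy as np

def regularized_advantage(advantage, fairness_violation, lam=0.5):
    """Advantage estimate minus a weighted per-action fairness-violation term."""
    return advantage - lam * fairness_violation

# Toy example with two candidate actions.
advantages = np.array([1.0, 0.8])   # raw advantage estimates
violations = np.array([0.9, 0.1])   # per-action fairness violations (hypothetical)
adj = regularized_advantage(advantages, violations)
print(adj)
```

After regularization, the action with the lower fairness violation ends up with the higher adjusted advantage, so a standard policy-gradient update would shift probability toward it without any change to the reward function.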